Results 1 - 20 of 22
1.
Neural Comput ; 36(6): 1041-1083, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38669693

ABSTRACT

We consider a model of basic inner retinal connectivity where bipolar and amacrine cells interconnect and both cell types project onto ganglion cells, modulating their response output to the brain's visual areas. We derive an analytical formula for the spatiotemporal response of retinal ganglion cells to stimuli, taking into account the effects of amacrine cell inhibition. This analysis reveals two important functional parameters of the network: (1) the intensity of the interactions between bipolar and amacrine cells and (2) the characteristic timescale of these responses. Both parameters have a profound combined impact on the spatiotemporal features of retinal ganglion cells' responses to light. The validity of the model is confirmed by faithfully reproducing pharmacogenetic experimental results obtained by stimulating excitatory DREADDs (Designer Receptors Exclusively Activated by Designer Drugs) expressed in ganglion cells and subclasses of amacrine cells, thereby modifying the inner retinal network's activity in response to visual stimuli in a complex, entangled manner. Our mathematical model allows us to explore and decipher these complex effects in a manner that would not be feasible experimentally and provides novel insights into retinal dynamics.


Subjects
Retina , Retinal Ganglion Cells , Retinal Ganglion Cells/physiology , Retina/physiology , Animals , Models, Neurological , Amacrine Cells/physiology , Computer Simulation , Humans , Visual Pathways/physiology , Photic Stimulation/methods , Nerve Net/physiology , Visual Fields/physiology , Retinal Bipolar Cells/physiology
2.
Open Biol ; 12(3): 210367, 2022 03.
Article in English | MEDLINE | ID: mdl-35259949

ABSTRACT

Retinal neurons are remarkably diverse in structure, function and genetic identity. Classifying these cells is a challenging task, requiring multimodal methodology. Here, we introduce a novel approach for retinal ganglion cell (RGC) classification, based on pharmacogenetics combined with immunohistochemistry and large-scale retinal electrophysiology. Our strategy allows grouping of cells sharing gene expression and an understanding of how these cell classes respond to basic and complex visual scenes. Our approach consists of several consecutive steps. First, the spike firing frequency is increased in RGCs co-expressing a certain gene (Scnn1a or Grik4) using excitatory DREADDs (designer receptors exclusively activated by designer drugs) in order to single out activity originating specifically from these cells. The location of their spikes is then combined with post hoc immunostaining to unequivocally characterize their anatomical and functional features. We grouped these isolated RGCs into multiple clusters based on spike train similarities. Using this approach, we were able to extend the pre-existing list of Grik4-expressing RGC types to a total of eight and, for the first time, provide a phenotypical description of 13 Scnn1a-expressing RGCs. The insights and methods gained here can guide not only RGC classification but also neuronal classification challenges in other brain regions.


Subjects
Retina , Retinal Ganglion Cells , Brain , Retinal Ganglion Cells/metabolism
3.
J Neurophysiol ; 127(5): 1334-1347, 2022 05 01.
Article in English | MEDLINE | ID: mdl-35235437

ABSTRACT

Computing the spike-triggered average (STA) is a simple method to estimate linear receptive fields (RFs) in sensory neurons. For random, uncorrelated stimuli, the STA provides an unbiased RF estimate, but in practice, white noise at high resolution is not an optimal stimulus choice, as it usually evokes only weak responses. Therefore, for a visual stimulus, images of randomly modulated blocks of pixels are often used. This solution naturally limits the resolution at which an RF can be measured. Here, we present a simple super-resolution technique that can overcome these limitations. We define a novel stimulus type, the shifted white noise (SWN), by introducing random spatial shifts into the usual stimulus to increase the resolution of the measurements. In simulated data, we show that the average error using the SWN was 1.7 times smaller than when using the classical stimulus, with successful mapping of 2.3 times more neurons, covering a broader range of RF sizes. Moreover, successful RF mapping was achieved with brief recordings of light responses, lasting only about 1 min of activity, which is more than 10 times more efficient than the classical white noise stimulus. In recordings from mouse retinal ganglion cells with large-scale multielectrode arrays, we successfully mapped 21 times more RFs than when using traditional white noise stimuli. In summary, randomly shifting the usual white noise stimulus significantly improves RF estimation and requires only short recordings. NEW & NOTEWORTHY We present a novel approach to measure receptive fields in large and heterogeneous populations of sensory neurons recorded with large-scale, high-density multielectrode arrays. Our approach leverages super-resolution principles to improve the yield of the spike-triggered average method. By simply designing a new stimulus, we provide experimentalists with a new and fast technique to simultaneously detect more receptive fields at higher resolution in populations of hundreds to thousands of neurons.
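The STA estimator at the core of this approach amounts to averaging the stimulus frames that preceded each spike. Below is a minimal sketch; the toy cell, array shapes and variable names are illustrative assumptions, not the paper's actual pipeline or its shifted-white-noise implementation.

```python
import numpy as np

def spike_triggered_average(stimulus, spikes, depth):
    """Average the `depth` stimulus frames preceding each spike.

    stimulus : (T, H, W) array of white-noise frames
    spikes   : (T,) array of spike counts per frame
    depth    : number of frames of temporal integration
    """
    sta = np.zeros((depth,) + stimulus.shape[1:])
    n = 0
    for t in range(depth, len(spikes)):
        if spikes[t] > 0:
            sta += spikes[t] * stimulus[t - depth:t]
            n += spikes[t]
    return sta / max(n, 1)

# Toy example: a cell that fires when pixel (1, 1) was bright one frame ago.
rng = np.random.default_rng(0)
stim = rng.choice([-1.0, 1.0], size=(5000, 3, 3))
spk = np.zeros(5000, dtype=int)
spk[1:] = (stim[:-1, 1, 1] > 0).astype(int)
sta = spike_triggered_average(stim, spk, depth=2)
print(round(sta[-1, 1, 1], 2))  # centre pixel at the last lag dominates: prints 1.0
```

For uncorrelated noise, all other entries of `sta` average toward zero, which is why the STA recovers the linear RF; the paper's contribution is the stimulus design that lets this estimator work at higher spatial resolution.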


Subjects
Retinal Ganglion Cells , Animals , Mice , Photic Stimulation , Retinal Ganglion Cells/physiology
4.
J Imaging ; 8(1)2022 Jan 17.
Article in English | MEDLINE | ID: mdl-35049855

ABSTRACT

The retina is the entrance of the visual system. Although based on common biophysical principles, the dynamics of retinal neurons are quite different from those of their cortical counterparts, raising interesting problems for modellers. In this paper, I address some mathematically stated questions in this spirit, discussing, in particular: (1) How could lateral amacrine cell connectivity shape the spatio-temporal spike response of retinal ganglion cells? (2) How could spatio-temporal stimulus correlations and retinal network dynamics shape the spike train correlations at the output of the retina? These questions are addressed by first introducing a mathematically tractable model of the layered retina, integrating amacrine cells' lateral connectivity and piecewise linear rectification, which allows computing the receptive fields of retinal ganglion cells together with the voltage and spike correlations resulting from the amacrine cell network. I then review some recent results showing how the concept of spatio-temporal Gibbs distributions and linear response theory can be used to characterize the collective spike response, to a spatio-temporal stimulus, of a set of retinal ganglion cells coupled via effective interactions corresponding to the amacrine cell network. On these bases, I briefly discuss several potential consequences of these results at the cortical level.

5.
J Math Neurosci ; 11(1): 3, 2021 Jan 09.
Article in English | MEDLINE | ID: mdl-33420903

ABSTRACT

We analyse the potential effects of lateral connectivity (amacrine cells and gap junctions) on motion anticipation in the retina. Our main result is that lateral connectivity can, under conditions analysed in the paper, trigger a wave of activity enhancing the anticipation mechanism provided by local gain control (Berry et al. in Nature 398(6725):334-338, 1999; Chen et al. in J. Neurosci. 33(1):120-132, 2013). We illustrate these predictions with two examples studied in the experimental literature: differential motion sensitive cells (Baccus and Meister in Neuron 36(5):909-919, 2002) and direction sensitive cells, where direction sensitivity is inherited from asymmetry in gap junction connectivity (Trenholm et al. in Nat. Neurosci. 16:154-156, 2013). We finally present reconstructions of retinal responses to 2D visual inputs to assess the ability of our model to anticipate motion in the case of three different 2D stimuli.

6.
Entropy (Basel) ; 23(2)2021 Jan 27.
Article in English | MEDLINE | ID: mdl-33514033

ABSTRACT

We establish a general linear response relation for spiking neuronal networks, based on chains with unbounded memory. This relation allows us to predict the influence of a weak-amplitude, time-dependent external stimulus on spatio-temporal spike correlations from the spontaneous statistics (without stimulus), in a general context where the memory in spike dynamics can extend arbitrarily far into the past. Using this approach, we show how the linear response is explicitly related to the collective effect of the stimulus, intrinsic neuronal dynamics, and network connectivity on spike train statistics. We illustrate our results with numerical simulations performed on a discrete-time integrate-and-fire model.
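To give a concrete sense of the kind of simulation mentioned above, here is a minimal discrete-time leaky integrate-and-fire network in which spontaneous statistics can be compared with the response to a weak constant stimulus. The parameter values, noise level and reset rule are simplified assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def simulate_lif(W, stimulus, gamma=0.9, theta=1.0, seed=0):
    """Discrete-time leaky integrate-and-fire network:
    spikes Z(t) = [V(t) >= theta]; membrane update
    V(t+1) = gamma * (1 - Z(t)) * V(t) + W @ Z(t) + I(t) + noise,
    so a spiking neuron is reset through the (1 - Z) factor."""
    rng = np.random.default_rng(seed)
    steps, n = stimulus.shape
    V = np.zeros(n)
    raster = np.zeros((steps, n), dtype=int)
    for t in range(steps):
        Z = (V >= theta).astype(int)
        raster[t] = Z
        V = gamma * (1 - Z) * V + W @ Z + stimulus[t] + 0.05 * rng.standard_normal(n)
    return raster

# Spontaneous activity vs. response to a weak, constant external drive.
rng = np.random.default_rng(1)
W = 0.2 * rng.standard_normal((20, 20))          # random synaptic weights
spont = simulate_lif(W, np.zeros((200, 20)))     # no stimulus
driven = simulate_lif(W, np.full((200, 20), 0.3))  # weak constant stimulus
```

The linear response theory in the paper predicts how `driven`'s spike correlations deviate from those of `spont` to first order in the stimulus amplitude; the simulation above only sets up the two regimes being compared.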

7.
Entropy (Basel) ; 22(11)2020 Nov 23.
Article in English | MEDLINE | ID: mdl-33266513

ABSTRACT

The Thermodynamic Formalism provides a rigorous mathematical framework for studying quantitative and qualitative aspects of dynamical systems. At its core, there is a variational principle that corresponds, in its simplest form, to the Maximum Entropy principle. It is used as a statistical inference procedure to represent, by specific probability measures (Gibbs measures), the collective behaviour of complex systems. This framework has found applications in different domains of science. In particular, it has been fruitful and influential in the neurosciences. In this article, we review how the Thermodynamic Formalism can be exploited in the field of theoretical neuroscience, as a conceptual and operational tool, in order to link the dynamics of interacting neurons and the statistics of action potentials from either experimental data or mathematical models. We comment on perspectives and open problems in theoretical neuroscience that could be addressed within this formalism.

8.
Front Syst Neurosci ; 14: 20, 2020.
Article in English | MEDLINE | ID: mdl-32362815

ABSTRACT

Functionally relevant network patterns form transiently in brain activity during rest, where a given subset of brain areas exhibits temporally synchronized BOLD signals. To adequately assess the biophysical mechanisms governing intrinsic brain activity, a detailed characterization of the dynamical features of functional networks is needed from the experimental side to constrain theoretical models. In this work, we use an open-source fMRI dataset from 100 healthy participants from the Human Connectome Project and analyze whole-brain activity using Leading Eigenvector Dynamics Analysis (LEiDA), which serves to characterize brain activity at each time point by its whole-brain BOLD phase-locking pattern. Clustering these BOLD phase-locking patterns into a set of k states, we demonstrate that the cluster centroids closely overlap with reference functional subsystems. Borrowing tools from dynamical systems theory, we characterize spontaneous brain activity in the form of trajectories within the state space, calculating the Fractional Occupancy and the Dwell Times of each state, as well as the Transition Probabilities between states. Finally, we demonstrate that within-subject reliability is maximized when including the high frequency components of the BOLD signal (>0.1 Hz), indicating the existence of individual fingerprints in dynamical patterns evolving at least as fast as the temporal resolution of acquisition (here TR = 0.72 s). Our results reinforce the mechanistic scenario that resting-state networks are the expression of erratic excursions from a baseline synchronous steady state into weakly stable, partially synchronized states, which we term ghost attractors. To better understand the rules governing the transitions between ghost attractors, we use methods from dynamical systems theory, giving insights into high-order mechanisms underlying brain function.
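Once each time point has been assigned a cluster label, the dynamical measures named above (Fractional Occupancy, Dwell Times, Transition Probabilities) reduce to simple bookkeeping. A minimal sketch, with a toy label sequence standing in for real LEiDA output:

```python
import numpy as np

def state_statistics(labels, k):
    """Fractional occupancy, mean dwell time (in samples), and transition
    probabilities for a sequence of cluster labels in {0, ..., k-1}."""
    labels = np.asarray(labels)
    occupancy = np.bincount(labels, minlength=k) / len(labels)
    # Dwell times: lengths of consecutive runs of the same state.
    change = np.flatnonzero(np.diff(labels) != 0)
    bounds = np.concatenate(([0], change + 1, [len(labels)]))
    runs = np.diff(bounds)
    run_states = labels[bounds[:-1]]
    dwell = np.array([runs[run_states == s].mean() if (run_states == s).any() else 0.0
                      for s in range(k)])
    # Transition probabilities between successive time points (row-normalized counts).
    T = np.zeros((k, k))
    for a, b in zip(labels[:-1], labels[1:]):
        T[a, b] += 1
    T /= np.maximum(T.sum(axis=1, keepdims=True), 1)
    return occupancy, dwell, T

occ, dwell, T = state_statistics([0, 0, 1, 1, 1, 0, 2, 2], k=3)
print(occ)  # fraction of time spent in each state
```

In the paper these statistics are computed per subject from the k-means labels of the BOLD phase-locking patterns; the toy sequence here merely illustrates the definitions.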

9.
Chaos ; 29(10): 103105, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31675822

ABSTRACT

We review two examples where the linear response of a neuronal network submitted to an external stimulus can be derived explicitly, including its dependence on network parameters. This is done in a statistical-physics-like approach where one associates, with the spontaneous dynamics of the model, a natural notion of Gibbs distribution inherited from ergodic theory or stochastic processes. These two examples are the Amari-Wilson-Cowan model [S. Amari, Syst. Man Cybernet. SMC-2, 643-657 (1972); H. R. Wilson and J. D. Cowan, Biophys. J. 12, 1-24 (1972)] and a conductance-based integrate-and-fire model [M. Rudolph and A. Destexhe, Neural Comput. 18, 2146-2210 (2006); M. Rudolph and A. Destexhe, Neurocomputing 70(10-12), 1966-1969 (2007)].

10.
Sci Rep ; 9(1): 1859, 2019 02 12.
Article in English | MEDLINE | ID: mdl-30755684

ABSTRACT

During early development, waves of activity propagate across the retina and play a key role in the proper wiring of the early visual system. During a particular phase of retinal development (stage II), these waves are triggered by a transient network of neurons, called Starburst Amacrine Cells (SACs), showing a bursting activity that disappears upon further maturation. The mechanisms underlying the spontaneous bursting and the transient excitability of immature SACs are not yet completely clear. While several models have attempted to reproduce retinal waves, none of them is able to mimic the rhythmic autonomous bursting of individual SACs and reveal how these cells change their intrinsic properties during development. Here, we introduce a mathematical model, grounded in biophysics, which enables us to reproduce the bursting activity of SACs and to propose a plausible, generic and robust mechanism that generates it. The core parameters controlling repetitive firing are fast depolarizing voltage-gated calcium channels and hyperpolarizing voltage-gated potassium channels. The quiescent phase of bursting is controlled by a slow afterhyperpolarization (sAHP), mediated by calcium-dependent potassium channels. Based on a bifurcation analysis, we show how biophysical parameters regulating calcium and potassium activity control the spontaneously occurring fast oscillatory activity followed by long refractory periods in individual SACs. We make a testable experimental prediction about the role of voltage-dependent potassium channels in the excitability properties of SACs and in the evolution of this excitability along development. We also propose an explanation of how SACs can exhibit a large variability in their bursting periods, as observed experimentally within a SAC network as well as across different species, yet based on a single, simple mechanism. As we discuss, these observations at the cellular level have a deep impact on the description of retinal waves.


Subjects
Models, Theoretical , Retina/embryology , Retinal Ganglion Cells/physiology , Algorithms , Amacrine Cells/physiology , Animals , Calcium/physiology , Calmodulin/physiology , Kinetics , Normal Distribution , Oscillometry , Potassium Channels/physiology , Retina/physiology , Visual Pathways/physiology
11.
Front Neuroinform ; 11: 49, 2017.
Article in English | MEDLINE | ID: mdl-28919854

ABSTRACT

The retina encodes visual scenes as trains of action potentials that are sent to the brain via the optic nerve. In this paper, we describe new, freely accessible, user-end software that helps to better understand this coding. It is called PRANAS (https://pranas.inria.fr), standing for Platform for Retinal ANalysis And Simulation. PRANAS targets neuroscientists and modelers by providing a unique set of retina-related tools. PRANAS integrates a retina simulator allowing large-scale simulations while keeping a strong biological plausibility, together with a toolbox for the analysis of spike train population statistics. The statistical method (entropy maximization under constraints) takes both spatial and temporal correlations into account as constraints, making it possible to analyze the effects of memory on statistics. PRANAS also integrates a tool for computing receptive fields and representing them in 3D (space-time). All these tools are accessible through a friendly graphical user interface. The most CPU-costly of them have been implemented to run in parallel.

12.
Sci Rep ; 7: 42330, 2017 02 10.
Article in English | MEDLINE | ID: mdl-28186129

ABSTRACT

We have investigated the ontogeny of light-driven responses in mouse retinal ganglion cells (RGCs). Using a large-scale, high-density multielectrode array, we recorded from hundreds to thousands of RGCs simultaneously at the pan-retinal level, including dorsal and ventral locations. Responses to different contrasts not only revealed a complex developmental profile for ON, OFF and ON-OFF responses, but also unveiled differences between dorsal and ventral RGC responses. At eye-opening, dorsal RGCs of all types were more responsive to light, perhaps indicating an environmental priority to nest viewing for pre-weaning pups. The developmental profile of ON and OFF responses exhibited antagonistic behaviour, with the strongest ON responses shortly after eye-opening, followed by an increase in the strength of OFF responses later on. Further, we found that, with maturation, receptive field (RF) center sizes decrease, spike-triggered average responses to white noise become stronger, and centers become more circular while maintaining differences between RGC types. We conclude that the maturation of retinal functionality is not spatially homogeneous, likely reflecting ecological requirements that favour earlier maturation of the dorsal retina.


Subjects
Retinal Ganglion Cells/cytology , Retinal Ganglion Cells/radiation effects , Action Potentials/radiation effects , Aging/physiology , Animals , Electrodes , Mice, Inbred C57BL , Retinal Ganglion Cells/physiology , Time Factors
13.
Neural Comput ; 29(1): 146-170, 2017 01.
Article in English | MEDLINE | ID: mdl-27764593

ABSTRACT

We initiate a mathematical analysis of hidden effects induced by binning spike trains of neurons. Assuming that the original spike train has been generated by a discrete Markov process, we show that binning generates a stochastic process that is no longer Markov but is instead a variable-length Markov chain (VLMC) with unbounded memory. We also show that the law of the binned raster is a Gibbs measure in the DLR (Dobrushin-Lanford-Ruelle) sense coined in mathematical statistical mechanics. This allows the derivation of several important consequences for the statistical properties of binned spike trains. In particular, we introduce the DLR framework as a natural setting in which to mathematically formalize anticipation, that is, to tell "how good" our nervous system is at making predictions. In a probabilistic sense, this corresponds to conditioning a process on its future, and we discuss how binning may affect our conclusions on this ability. We finally comment on the possible consequences of binning in the detection of spurious phase transitions or of incorrect evidence of criticality.
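The binning operation under study is simple to state: a bin is marked active if at least one spike falls inside it. A sketch of this common "OR" binning, with an illustrative two-neuron raster:

```python
import numpy as np

def bin_spikes(raster, width):
    """Collapse a binary spike raster of shape (T, N) into bins of `width`
    time steps: a binned entry is 1 if the neuron spiked at least once
    in that bin (trailing steps that do not fill a bin are dropped)."""
    T, N = raster.shape
    usable = (T // width) * width
    blocks = raster[:usable].reshape(-1, width, N)
    return (blocks.sum(axis=1) > 0).astype(int)

raster = np.array([[1, 0],
                   [0, 0],
                   [0, 1],
                   [0, 1],
                   [1, 0],
                   [0, 0]])
binned = bin_spikes(raster, width=2)
print(binned)
```

The paper's point is that even this innocuous-looking map destroys the Markov property of the underlying process; the code only makes the operation itself concrete.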

14.
J Neurosci ; 33(38): 15032-43, 2013 Sep 18.
Article in English | MEDLINE | ID: mdl-24048833

ABSTRACT

Homeostatic intrinsic plasticity (HIP) is a ubiquitous cellular mechanism regulating neuronal activity, cardinal for the proper functioning of nervous systems. In invertebrates, HIP is critical for orchestrating stereotyped activity patterns. The functional impact of HIP remains more obscure in vertebrate networks, where higher order cognitive processes rely on complex neural dynamics. The hypothesis has emerged that HIP might control the complexity of activity dynamics in recurrent networks, with important computational consequences. However, conflicting results about the causal relationships between cellular HIP, network dynamics, and computational performance have arisen from machine-learning studies. Here, we assess how cellular HIP effects translate into collective dynamics and computational properties in biological recurrent networks. We develop a realistic multiscale model including a generic HIP rule regulating the neuronal threshold with realistic molecular signaling pathway kinetics, Dale's principle, sparse connectivity, synaptic balance, and Hebbian synaptic plasticity (SP). Dynamic mean-field analysis and simulations reveal that HIP sets a working point at which inputs are transduced by large derivative ranges of the transfer function. This cellular mechanism ensures increased network dynamics complexity, robust balance with SP at the edge of chaos, and improved input separability. Although critically dependent upon balanced excitatory and inhibitory drives, these effects display striking robustness to changes in network architecture, learning rates, and input features. Thus, the mechanism we unveil might represent a ubiquitous cellular basis for complex dynamics in neural networks. Understanding this robustness is an important challenge to unraveling principles underlying self-organization around criticality in biological recurrent neural networks.


Subjects
Computer Simulation , Homeostasis/physiology , Models, Neurological , Neuronal Plasticity/physiology , Neurons/physiology , Nonlinear Dynamics , Action Potentials/physiology , Animals , Humans , Nerve Net/physiology , Neural Networks, Computer , Synapses/physiology
15.
J Physiol Paris ; 105(1-3): 91-7, 2011.
Article in English | MEDLINE | ID: mdl-21964248

ABSTRACT

This paper presents a numerical analysis of the role of asymptotic dynamics in the design of hardware-based implementations of generalised integrate-and-fire (gIF) neuron models. The proposed implementations are based on extensions of the discrete-time spiking neuron model introduced by Soula et al. and have been implemented on Field Programmable Gate Array (FPGA) devices using fixed-point arithmetic. Mathematical studies conducted by Cessac have demonstrated the existence of three main regimes (neural death, periodic and chaotic) in the activity of such neuron models. These activity regimes are characterised in hardware through a precision analysis in the design of an architecture for an FPGA-based implementation. The proposed approach, although based on gIF neuron models and FPGA hardware, can be extended to more complex neuron models as well as to different in silico implementations.


Subjects
Computer Simulation , Models, Neurological , Neural Networks, Computer , Neurons , Action Potentials/physiology
16.
J Math Neurosci ; 1(1): 8, 2011 Aug 25.
Article in English | MEDLINE | ID: mdl-22657160

ABSTRACT

We consider a conductance-based neural network inspired by the generalized integrate-and-fire model introduced by Rudolph and Destexhe in 1996. We show the existence and uniqueness of a Gibbs distribution characterizing spike train statistics. The corresponding Gibbs potential is explicitly computed. These results hold in the presence of a time-dependent stimulus and therefore apply to non-stationary dynamics.

17.
J Physiol Paris ; 104(1-2): 5-18, 2010.
Article in English | MEDLINE | ID: mdl-19925865

ABSTRACT

In the present overview, our wish is to demystify some aspects of coding with spike timing, through a simple review of well-understood technical facts regarding spike coding. Our goal is a better understanding of the extent to which computing and modeling with spiking neuron networks might be biologically plausible and computationally efficient. We intentionally restrict ourselves to a deterministic implementation of spiking neuron networks and consider that the dynamics of a network is defined by a non-stochastic mapping. By staying in this rather simple framework, we are able to propose results, formulas and concrete numerical values on several topics: (i) general time constraints, (ii) links between continuous signals and spike trains, and (iii) parameter adjustment in spiking neuron networks. Besides an argued review of several facts and issues about neural coding by spikes, we propose new results, such as a numerical evaluation of the most critical temporal variables that schedule the progress of realistic spike trains. When implementing spiking neuron networks, for biological simulation or computational purposes, it is important to take into account the indisputable facts unfolded here. This precaution could prevent one from implementing mechanisms that would be meaningless relative to obvious time constraints, or from artificially introducing spikes when continuous calculations would be sufficient and simpler. It is also pointed out that implementing a large-scale spiking neuron network is ultimately a simple task.


Subjects
Action Potentials/physiology , Computer Simulation , Models, Neurological , Neurons/physiology , Animals , Humans , Nerve Net/physiology , Neural Networks, Computer , Nonlinear Dynamics , Time Factors
18.
Article in English | MEDLINE | ID: mdl-19255631

ABSTRACT

We deal with the problem of bridging the gap between two scales in neuronal modeling. At the first (microscopic) scale, neurons are considered individually and their behavior described by stochastic differential equations that govern the time variations of their membrane potentials. They are coupled by synaptic connections acting on their resulting activity, a nonlinear function of their membrane potential. At the second (mesoscopic) scale, interacting populations of neurons are described individually by similar equations. The equations describing the dynamical and the stationary mean-field behaviors are considered as functional equations on a set of stochastic processes. Using this new point of view allows us to prove that these equations are well-posed on any finite time interval and to provide a constructive method for effectively computing their unique solution. This method is proved to converge to the unique solution, and we characterize its complexity and convergence rate. We also provide partial results for the stationary problem on infinite time intervals. These results shed some new light on neural mass models such as that of Jansen and Rit (1995): their dynamics appears as a coarse approximation of the much richer dynamics that emerges from our analysis. Our numerical experiments confirm that the framework we propose and the numerical methods we derive from it provide a new and powerful tool for the exploration of neural behaviors at different scales.

19.
Article in English | MEDLINE | ID: mdl-18946532

ABSTRACT

We present a mathematical analysis of networks of integrate-and-fire (IF) neurons with conductance-based synapses. Taking into account the realistic fact that the spike time is only known within some finite precision, we propose a model where spikes are effective at times that are multiples of a characteristic time scale delta, where delta can be arbitrarily small (in particular, well beyond the numerical precision). We give a complete mathematical characterization of the model dynamics and obtain the following results. The asymptotic dynamics is composed of finitely many stable periodic orbits, whose number and period can be arbitrarily large and can diverge in a region of the synaptic weights space, traditionally called the "edge of chaos", a notion mathematically well defined in the present paper. Furthermore, except at the edge of chaos, there is a one-to-one correspondence between the membrane potential trajectories and the raster plot. This shows that the neural code is entirely "in the spikes" in this case. As a key tool, we introduce an order parameter, easy to compute numerically and closely related to a natural notion of entropy, providing a relevant characterization of the computational capabilities of the network. This allows us to compare the computational capabilities of leaky IF models and conductance-based models. The present study considers networks with constant input and without time-dependent plasticity, but the framework has been designed for both extensions.

20.
Neural Comput ; 20(12): 2937-66, 2008 Dec.
Article in English | MEDLINE | ID: mdl-18624656

ABSTRACT

We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule, including passive forgetting and different timescales for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
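The two-timescale setup described above (fast rate dynamics, slow Hebbian updates with passive forgetting) can be sketched in a few lines; the gain, learning rate, forgetting rate and network size below are illustrative choices, not the paper's parameters.

```python
import numpy as np

def hebbian_step(W, x, eps=0.01, lam=0.001):
    """One step of a generic Hebbian rule with passive forgetting:
    W <- (1 - lam) * W + eps * outer(x, x), with self-connections zeroed.
    The (1 - lam) factor slowly erases old correlations."""
    W = (1 - lam) * W + eps * np.outer(x, x)
    np.fill_diagonal(W, 0.0)
    return W

def network_step(W, x, g=3.0):
    """Fast neuronal dynamics: rate units with a sigmoid transfer function."""
    return np.tanh(g * (W @ x))

rng = np.random.default_rng(0)
n = 10
W = rng.standard_normal((n, n)) / np.sqrt(n)  # random initial synaptic graph
x = rng.standard_normal(n)
# Slow learning interleaved with fast activity dynamics (two timescales).
for _ in range(500):
    for _ in range(5):      # activity relaxes between weight updates
        x = network_step(W, x)
    W = hebbian_step(W, x)
```

Because activity is bounded by the tanh and forgetting is linear in `W`, the weights stay bounded (roughly by `eps / lam`); the paper's analysis tracks how such coupled evolution moves the Jacobian's spectrum, and hence the largest Lyapunov exponent, toward zero.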


Subjects
Learning/physiology , Mathematics , Nerve Net/physiology , Neural Networks, Computer , Neurons/physiology , Nonlinear Dynamics , Action Potentials/physiology , Animals , Entropy , Feedback , Humans , Synapses/physiology , Time Factors